Optimal control theory, an extension of the calculus of variations, is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and his collaborators in the Soviet Union〔L. S. Pontryagin, 1962. ''The Mathematical Theory of Optimal Processes''.〕 and Richard Bellman in the United States. Optimal control can be seen as a control strategy in control theory.

==General method==
Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost functional. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition, also known as Pontryagin's minimum principle or simply Pontryagin's Principle〔I. M. Ross, 2009. ''A Primer on Pontryagin's Principle in Optimal Control'', Collegiate Publishers. ISBN 978-0-9843571-0-9.〕), or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition).

We begin with a simple example. Consider a car traveling on a straight line through a hilly road. The question is, how should the driver press the accelerator pedal in order to ''minimize'' the total traveling time? In this example, the term ''control law'' refers specifically to the way in which the driver presses the accelerator and shifts the gears. The ''system'' consists of both the car and the road, and the ''optimality criterion'' is the minimization of the total traveling time. Control problems usually include ancillary constraints. For example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, there may be speed limits, and so on.

A proper cost functional is a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system. It is often the case that the constraints are interchangeable with the cost functional. Another optimal control problem is to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another control problem is to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.

A more abstract framework goes as follows. Minimize the continuous-time cost functional

: <math>J = \Phi\,[\,x(t_0), t_0, x(t_f), t_f\,] + \int_{t_0}^{t_f} \mathcal{L}\,[\,x(t), u(t), t\,]\, dt</math>

subject to the first-order dynamic constraints (the state equation)

: <math>\dot{x}(t) = a\,[\,x(t), u(t), t\,],</math>

the algebraic ''path constraints''

: <math>b\,[\,x(t), u(t), t\,] \le 0,</math>

and the boundary conditions

: <math>\phi\,[\,x(t_0), t_0, x(t_f), t_f\,] = 0,</math>

where <math>x(t)</math> is the ''state'', <math>u(t)</math> is the ''control'', <math>t</math> is the independent variable (generally speaking, time), <math>t_0</math> is the initial time, and <math>t_f</math> is the terminal time. The terms <math>\Phi</math> and <math>\mathcal{L}</math> are called the ''endpoint cost'' and ''Lagrangian'', respectively. Furthermore, it is noted that the path constraints are in general ''inequality'' constraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution to the optimal control problem is ''locally minimizing''.
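To make the framework concrete, the minimum-time car problem above can be cast in this form under simplifying assumptions (a flat road and a double-integrator model; this instantiation is an illustrative sketch, not taken from the cited references). Take the state to be position and speed, <math>x = (p, v)</math>, and the control <math>u</math> to be the commanded acceleration, bounded by the pedal and brake limits. Then the problem reads: minimize

: <math>J = \int_{t_0}^{t_f} 1 \, dt = t_f - t_0 \qquad (\mathcal{L} \equiv 1,\ \Phi \equiv 0)</math>

subject to the state equation

: <math>\dot{p}(t) = v(t), \qquad \dot{v}(t) = u(t),</math>

the path constraint

: <math>u_{\min} \le u(t) \le u_{\max},</math>

and the boundary conditions

: <math>p(t_0) = 0, \quad v(t_0) = 0, \quad p(t_f) = d, \quad v(t_f) = 0,</math>

where <math>d</math> is the length of the course. Minimizing <math>J</math> is then exactly minimizing the traveling time.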
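Problems of this kind are rarely solvable in closed form and are usually handled numerically. The following is a minimal sketch of one standard numerical approach, direct transcription: the state and control trajectories are discretized on a time grid and the result is handed to a generic nonlinear-programming solver. It assumes Python with NumPy and SciPy, and, to keep the sketch short, it solves a minimum-''energy'' variant of the double-integrator problem with a fixed final time (a free final time, as in the minimum-time problem, needs a slightly more elaborate setup); all names and parameter values are illustrative.

<syntaxhighlight lang="python">
# A sketch of direct transcription for the double-integrator example.
# Everything here (grid size, solver choice, the minimum-energy cost)
# is an illustrative assumption, not part of the article.
import numpy as np
from scipy.optimize import minimize

N = 40        # number of time steps
tf = 1.0      # fixed final time
h = tf / N    # step length
d = 1.0       # required final position

def unpack(z):
    """Split the decision vector into positions p, speeds v, controls u."""
    return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]

def cost(z):
    # Lagrangian L = u^2, integrated with a rectangle rule.
    _, _, u = unpack(z)
    return h * np.sum(u ** 2)

def defects(z):
    # Explicit-Euler form of the state equation p' = v, v' = u,
    # plus the boundary conditions p(0) = v(0) = 0, p(tf) = d, v(tf) = 0.
    # The solver must drive every entry to zero.
    p, v, u = unpack(z)
    dp = p[1:] - p[:-1] - h * v[:-1]
    dv = v[1:] - v[:-1] - h * u
    bc = [p[0], v[0], p[-1] - d, v[-1]]
    return np.concatenate([dp, dv, bc])

z0 = np.zeros(3 * N + 2)   # naive (infeasible) initial guess
res = minimize(cost, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects})
p, v, u = unpack(res.x)
print("cost:", res.fun)    # analytic optimum is 12*d**2/tf**3 = 12 (approx.)
</syntaxhighlight>

Finer grids and higher-order collocation rules give better approximations of the continuous problem; dedicated optimal control software automates exactly this construction.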